Risk and Opportunity
Risks and Opportunities in Human-Machine Teaming in Operationalizing Machine Learning Target Variables
Guo, Mengtian, Gotz, David, Wang, Yue
Predictive modeling has the potential to enhance human decision-making. However, many predictive models fail in practice due to flawed problem formulation: when the prediction target is an abstract concept or construct, practitioners must define an appropriate target variable as a proxy that operationalizes the construct of interest. The choice of an appropriate proxy target variable is rarely self-evident in practice, requiring both domain knowledge and iterative data modeling. This process is inherently collaborative, involving both domain experts and data scientists. In this work, we explore how human-machine teaming can support this process by accelerating iterations while preserving human judgment. We study the impact of two human-machine teaming strategies on proxy construction: 1) relevance-first: humans leading the process by selecting relevant proxies, and 2) performance-first: machines leading the process by recommending proxies based on predictive performance. Based on a controlled user study of a proxy construction task (N = 20), we show that the performance-first strategy facilitated faster iterations and decision-making, but also biased users towards well-performing proxies that are misaligned with the application goal. Our study highlights the opportunities and risks of human-machine teaming in operationalizing machine learning target variables, yielding insights for future research to explore the opportunities and mitigate the risks.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > North Carolina (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (14 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.69)
- Education > Educational Setting (0.68)
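The two teaming strategies in the abstract above can be contrasted with a toy sketch. Everything here is illustrative and not taken from the study: in a performance-first workflow, candidate proxy variables are ranked by how well a simple model predicts them, which can favor easily predictable proxies over construct-relevant ones (the risk the study identifies).

```python
import random

random.seed(0)

# Hypothetical setup: one numeric feature and three candidate proxy
# target variables (labelings) for an abstract construct of interest.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]

# Each proxy maps the feature to a binary label with different noise,
# standing in for different operationalizations of the construct.
proxies = {
    "proxy_a": [int(v > 0) for v in x],                            # perfectly predictable
    "proxy_b": [int(v > 0) ^ (random.random() < 0.2) for v in x],  # noisier labeling
    "proxy_c": [random.randint(0, 1) for _ in range(n)],           # unrelated to the feature
}

def cv_accuracy(xs, ys, threshold=0.0):
    # Score a trivial threshold classifier; a real pipeline would
    # cross-validate an actual model here instead.
    preds = [int(v > threshold) for v in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Performance-first: rank candidate proxies purely by predictive performance.
ranked = sorted(proxies, key=lambda k: cv_accuracy(x, proxies[k]), reverse=True)
print(ranked)  # the most predictable proxy ranks first, regardless of relevance
```

In a relevance-first workflow, a domain expert would instead pick among `proxies` by construct relevance before any scores are computed; the sketch only makes visible why a performance-led ranking can pull users toward well-performing but misaligned proxies.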
Hanna Barakat's image collection & the paradoxes of depicting diversity in AI history
As part of a collaboration between Better Images of AI and Cambridge University's Diversity Fund, Hanna Barakat was commissioned to create a digital collage series depicting diverse images of the learning and education of AI at Cambridge. Hanna's series complements the competition we opened to the public at the end of last year, which invited submissions for better images of AI from the wider community – you can see the winning entries here. Hanna shares her thoughts on the challenges of creating images that communicate about AI histories and the inherent contradictions that arise in this work. As outlined by the Better Images of AI project, normative depictions of AI continue to perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. The lack of diversity, and the problematic interpretation of diversity, in AI-generated images is not merely an 'output' issue that can be easily fixed.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.25)
- North America > United States > California > Los Angeles County > Los Angeles (0.15)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.06)
- (2 more...)
AI Governance through Markets
Tomei, Philip Moreira, Jain, Rupal, Franklin, Matija
This paper argues that market governance mechanisms should be considered a key approach in the governance of artificial intelligence (AI), alongside traditional regulatory frameworks. While current governance approaches have predominantly focused on regulation, we contend that market-based mechanisms offer effective incentives for responsible AI development. We examine four emerging vectors of market governance: insurance, auditing, procurement, and due diligence, demonstrating how these mechanisms can affirm the relationship between AI risk and financial risk while addressing capital allocation inefficiencies. While we do not claim that market forces alone can adequately protect societal interests, we maintain that standardised AI disclosures and market mechanisms can create powerful incentives for safe and responsible AI development. This paper urges regulators, economists, and machine learning researchers to investigate and implement market-based approaches to AI governance.
- North America > United States (1.00)
- Europe (1.00)
- Social Sector (1.00)
- Law > Statutes (1.00)
- Law > Intellectual Property & Technology Law (1.00)
- (11 more...)
SusGen-GPT: A Data-Centric LLM for Financial NLP and Sustainability Report Generation
Wu, Qilong, Xiang, Xiaoneng, Huang, Hejia, Wang, Xuan, Jie, Yeo Wei, Satapathy, Ranjan, Filho, Ricardo Shirota, Veeravalli, Bharadwaj
The rapid growth of the financial sector and the rising focus on Environmental, Social, and Governance (ESG) considerations highlight the need for advanced NLP tools. However, open-source LLMs proficient in both finance and ESG domains remain scarce. To address this gap, we introduce SusGen-30K, a category-balanced dataset comprising seven financial NLP tasks and ESG report generation, and propose TCFD-Bench, a benchmark for evaluating sustainability report generation. Leveraging this dataset, we developed SusGen-GPT, a suite of models achieving state-of-the-art performance across six adapted and two off-the-shelf tasks, trailing GPT-4 by only 2% despite using 7-8B parameters compared to GPT-4's 1,700B. Based on this, we propose the SusGen system, integrated with Retrieval-Augmented Generation (RAG), to assist in sustainability report generation. This work demonstrates the efficiency of our approach, advancing research in finance and ESG.
- North America > United States > Louisiana (0.04)
- North America > United States > Mississippi (0.04)
- North America > United States > Arkansas (0.04)
- (5 more...)
- Research Report (0.82)
- Public Relations > Community Relations (0.81)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Trading (1.00)
- (4 more...)
Risks and Opportunities of Open-Source Generative AI
Eiras, Francisco, Petrov, Aleksandar, Vidgen, Bertie, Schroeder, Christian, Pizzati, Fabio, Elkins, Katherine, Mukhopadhyay, Supratik, Bibi, Adel, Purewal, Aaron, Botos, Csaba, Steibel, Fabro, Keshtkar, Fazel, Barez, Fazl, Smith, Genevieve, Guadagni, Gianluca, Chun, Jon, Cabot, Jordi, Imperial, Joseph, Nolazco, Juan Arturo, Landay, Lori, Jackson, Matthew, Torr, Phillip H. S., Darrell, Trevor, Lee, Yong, Foerster, Jakob
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. This regulation is likely to put at risk the budding field of open-source generative AI. Using a three-stage framework for Gen AI development (near, mid and long-term), we analyze the risks and opportunities of open-source generative AI models with similar capabilities to the ones currently available (near to mid-term) and with greater capabilities (long-term). We argue that, overall, the benefits of open-source Gen AI outweigh its risks. As such, we encourage the open sourcing of models, training and evaluation data, and provide a set of recommendations and best practices for managing risks associated with open-source generative AI.
- Asia > China (0.46)
- Asia > Middle East > Saudi Arabia (0.28)
- North America > Canada (0.14)
- (22 more...)
- Overview (1.00)
- Research Report > Experimental Study (0.45)
- Social Sector (1.00)
- Law > Statutes (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (12 more...)
Latest AI announcements from the US Government include updated strategic plan
An updated roadmap to focus federal investments in AI research and development (R&D). The National AI R&D Strategic Plan has been updated (for the first time since 2019), and outlines priorities and goals for federal investments in AI R&D. The executive summary of the document notes that: "The federal government must place people and communities at the center by investing in responsible R&D that serves the public good, protects people's rights and safety, and advances democratic values. This update to the National AI R&D Strategic Plan is a roadmap for driving progress toward that goal." The plan reaffirms the eight strategies from the 2019 plan, and adds a ninth.
Biden to meet with experts on AI 'risks and opportunities'
FOX Business correspondent Lydia Hu has the latest on jobs at risk as AI further develops on 'America's Newsroom.' President Biden will meet with science and technology advisers on Wednesday to discuss the "risks and opportunities" that artificial intelligence technologies pose for Americans and national security. A White House official said the president would focus on the importance of protecting rights and safety to ensure there are appropriate safeguards and that innovation is responsible. Furthermore, Biden will call on Congress to pass bipartisan legislation to protect children and to limit the personal data tech companies collect. The President's Council of Advisors on Science and Technology, or PCAST, is a federal advisory committee composed of experts from outside the federal government charged with making science, technology, and innovation policy recommendations to the White House.
- North America > United States > Minnesota > Anoka County > Fridley (0.06)
- North America > United States > District of Columbia > Washington (0.06)
- Europe > Germany > Berlin (0.06)
AIPM - Roux Institute at Northeastern University
The Symposium on Risks and Opportunities of AI in Clinical Drug Development is an event jointly sponsored by Pfizer Inc., Northeastern University, the American Statistical Association (ASA), the Statistics Department and Data Science Institute at Columbia University, and OHDSI. The event is designed as a platform for distinguished statisticians, data scientists, regulators, and other professionals to address the challenges and opportunities of AI in pharmaceutical medicine; to foster collaboration among industry, academia, regulatory agencies, and professional associations; and to propose recommendations with policy implications for the proper implementation of AI in promoting public health. As a convener of researchers in AI, data science, biotechnology, computational medicine, and more, along with industry partners, academic faculty, and entrepreneurs, the Roux Institute at Northeastern University is uniquely positioned to host this event and connect the experts needed to tackle these challenges and opportunities. The Roux Institute is a center of activity for both the Observational Health Data Sciences and Informatics (OHDSI) initiative, which advances healthcare by fostering reproducible research through open science, and the Institute for Experiential Artificial Intelligence (IEAI), which researches and develops human-centric AI solutions that leverage machine technology to extend human intelligence.
NATO and US DoD AI Strategies Align with over 80 International Declarations on AI Ethics
In October, we included NATO's release of its first-ever strategy for artificial intelligence in the OODA Loop Daily Pulse. The strategy is primarily concerned with the impact AI will have on the NATO core commitments of collective defense, crisis management, and cooperative security. Worth a deeper dive is a framework within the overall NATO AI Strategy, which mirrors that of the DoD Joint Artificial Intelligence Center's (JAIC) and other U.S.-based efforts to establish norms around AI: "NATO establishes standards of responsible use of AI technologies, in accordance with international law and NATO's values." At the center of the NATO AI strategy are the "NATO Principles of Responsible Use of Artificial Intelligence in Defence," which are based on the NATO and Allies commitment to "ensuring that the AI applications they develop and consider for deployment will be – at the various stages of their lifecycles – in accordance with the following six principles: Lawfulness, Responsibility and Accountability, Explainability and Traceability, Reliability, Governability, and Bias Mitigation." OODA Loop provides actionable intelligence, analysis, and insight on global security, technology, and business issues.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Artificial Intelligence: Major Legal Discussions, Risks and Opportunities
Artificial intelligence is a hot topic affecting many industries. This webinar will present an overview of legal discussions on artificial intelligence through the lens of current developments by government actors. The focus will be on global legal discussions, concerns, risks, and opportunities that artificial intelligence poses for various industries, including but not limited to mobilization, smart cities, surveillance, industrial data, and health-tech.
- Overview (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)